Representation Learning of Auxiliary Concepts for Improved Student Modeling and Exercise Recommendation

Badran, Yahya, Preisach, Christine

arXiv.org Artificial Intelligence

Personalized recommendation is a key feature of intelligent tutoring systems, typically relying on accurate models of student knowledge. Knowledge Tracing (KT) models enable this by estimating a student's mastery based on their historical interactions. Many KT models rely on human-annotated knowledge concepts (KCs), which tag each exercise with one or more skills or concepts believed to be necessary for solving it. However, these KCs can be incomplete, error-prone, or overly general. In this paper, we propose a deep learning model that learns sparse binary representations of exercises, where each bit indicates the presence or absence of a latent concept. We refer to these representations as auxiliary KCs. They capture conceptual structure beyond human-defined annotations and are compatible with both classical models (e.g., Bayesian Knowledge Tracing, BKT) and modern deep learning KT architectures. We demonstrate that incorporating auxiliary KCs improves both student modeling and adaptive exercise recommendation. For student modeling, we show that augmenting classical models like BKT with auxiliary KCs improves predictive performance. For recommendation, we show that auxiliary KCs enhance both reinforcement-learning-based policies and a simple planning-based method (expectimax), yielding measurable gains in student learning outcomes within a simulated student environment.
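
To make the planning-based recommendation idea concrete, the sketch below implements a small depth-limited expectimax search over a toy simulated student in Python. The student model, its BKT-style mastery update, the constants (P_LEARN, P_GUESS, P_SLIP), and the reward (expected number of mastered KCs) are illustrative stand-ins, not the paper's simulator or reward; only the overall structure (chance nodes over correct/incorrect responses, exercises tagged with predefined and auxiliary KCs) follows the description above.

```python
from typing import Dict, List, Tuple

# Hypothetical toy student model: per-KC mastery probabilities,
# updated with a BKT-style posterior plus a learning transition after each attempt.
P_LEARN = 0.2   # assumed probability of acquiring an unmastered KC per attempt
P_GUESS = 0.25  # assumed guess probability
P_SLIP = 0.1    # assumed slip probability


def p_correct(mastery: Dict[str, float], kcs: List[str]) -> float:
    """Probability of a correct response: every tagged KC must 'fire' (noisy-AND)."""
    p = 1.0
    for kc in kcs:
        m = mastery[kc]
        p *= m * (1 - P_SLIP) + (1 - m) * P_GUESS
    return p


def update(mastery: Dict[str, float], kcs: List[str], correct: bool) -> Dict[str, float]:
    """Updated mastery after observing a response (posterior update, then learning)."""
    new = dict(mastery)
    for kc in kcs:
        m = new[kc]
        if correct:
            post = m * (1 - P_SLIP) / (m * (1 - P_SLIP) + (1 - m) * P_GUESS)
        else:
            post = m * P_SLIP / (m * P_SLIP + (1 - m) * (1 - P_GUESS))
        new[kc] = post + (1 - post) * P_LEARN
    return new


def value(mastery: Dict[str, float]) -> float:
    """Toy reward: expected number of mastered KCs (predefined + auxiliary)."""
    return sum(mastery.values())


def expectimax(mastery: Dict[str, float],
               exercises: Dict[str, List[str]],
               depth: int) -> Tuple[str, float]:
    """Pick the exercise maximizing expected future mastery under the toy model."""
    if depth == 0:
        return None, value(mastery)
    best_ex, best_val = None, float("-inf")
    for ex, kcs in exercises.items():
        pc = p_correct(mastery, kcs)
        # Chance node: average over the correct / incorrect outcomes.
        _, v_correct = expectimax(update(mastery, kcs, True), exercises, depth - 1)
        _, v_incorrect = expectimax(update(mastery, kcs, False), exercises, depth - 1)
        v = pc * v_correct + (1 - pc) * v_incorrect
        if v > best_val:
            best_ex, best_val = ex, v
    return best_ex, best_val


if __name__ == "__main__":
    # Exercises tagged with predefined KCs ("kc*") and learned auxiliary KCs ("aux*").
    exercises = {"e1": ["kc1", "aux1"], "e2": ["kc2"], "e3": ["kc1", "kc2", "aux2"]}
    mastery = {"kc1": 0.3, "kc2": 0.6, "aux1": 0.4, "aux2": 0.2}
    ex, val = expectimax(mastery, exercises, depth=2)
    print(f"recommended exercise: {ex} (expected mastery {val:.2f})")
```

Because the auxiliary KCs are plain binary tags, they enter this planner exactly like the predefined KCs: each exercise simply carries a longer tag list, and the same mastery bookkeeping applies.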


Sparse Binary Representation Learning for Knowledge Tracing

Badran, Yahya, Preisach, Christine

arXiv.org Artificial Intelligence

Knowledge tracing (KT) models aim to predict students' future performance based on their historical interactions. Most existing KT models rely exclusively on human-defined knowledge concepts (KCs) associated with exercises. As a result, the effectiveness of these models is highly dependent on the quality and completeness of the predefined KCs. Human labeling errors and the cost of covering all potential underlying KCs can limit model performance. In this paper, we propose a KT model, Sparse Binary Representation KT (SBRKT), that generates new KC labels, referred to as auxiliary KCs, which augment the predefined KCs to address the limitations of relying solely on human-defined KCs. These auxiliary KCs are learned as a binary vector representation, where each bit indicates the presence (one) or absence (zero) of an auxiliary KC. The resulting discrete representation allows the auxiliary KCs to be used in training any KT model that incorporates KCs. Unlike pre-trained dense embeddings, which are limited to models designed to accept such vectors, our discrete representations are compatible with both classical models, such as Bayesian Knowledge Tracing (BKT), and modern deep learning approaches. To generate this discrete representation, SBRKT employs a binarization method that learns a sparse representation and is fully trainable via stochastic gradient descent. Additionally, SBRKT incorporates a recurrent neural network (RNN) to capture temporal dynamics and predict future student responses by combining the auxiliary and predefined KCs. Experimental results demonstrate that SBRKT outperforms the tested baselines on several datasets and achieves competitive performance on others. Furthermore, incorporating the learned auxiliary KCs consistently improves the performance of BKT across all tested datasets.
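
The abstract does not spell out the binarization mechanism, so the PyTorch sketch below shows one common way to obtain sparse binary codes that remain trainable with stochastic gradient descent: sigmoid-activated exercise embeddings, hard-thresholded in the forward pass with a straight-through gradient, plus an L1-style penalty that pushes most bits toward zero. This is a minimal sketch under those assumptions; the class and parameter names (AuxiliaryKCEncoder, sparsity_weight) are illustrative and not taken from SBRKT, whose actual binarization and loss may differ.

```python
import torch
import torch.nn as nn


class StraightThroughBinarize(torch.autograd.Function):
    """Hard 0/1 threshold in the forward pass; identity gradient in the backward pass."""

    @staticmethod
    def forward(ctx, probs):
        return (probs > 0.5).float()

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through estimator


class AuxiliaryKCEncoder(nn.Module):
    """Maps exercise IDs to sparse binary auxiliary-KC vectors (illustrative, not SBRKT's exact design)."""

    def __init__(self, num_exercises: int, num_aux_kcs: int, sparsity_weight: float = 1e-3):
        super().__init__()
        self.logits = nn.Embedding(num_exercises, num_aux_kcs)
        self.sparsity_weight = sparsity_weight

    def forward(self, exercise_ids: torch.Tensor):
        probs = torch.sigmoid(self.logits(exercise_ids))        # soft activations in (0, 1)
        codes = StraightThroughBinarize.apply(probs)             # hard {0, 1} auxiliary-KC bits
        sparsity_loss = self.sparsity_weight * probs.mean()      # encourage most bits to be zero
        return codes, sparsity_loss


if __name__ == "__main__":
    encoder = AuxiliaryKCEncoder(num_exercises=100, num_aux_kcs=32)
    ids = torch.tensor([3, 17, 42])
    codes, sparsity_loss = encoder(ids)
    # In SBRKT-style training, `codes` would be combined with the predefined KCs and fed to an
    # RNN that predicts the next response; `sparsity_loss` would be added to that prediction loss.
    print(codes.shape, codes.sum(dim=1), sparsity_loss.item())
```

Because the output is a discrete 0/1 vector rather than a dense embedding, the learned bits can be exported after training and treated as extra KC tags by any KC-based model, including BKT.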